The Kolmogorov–Arnold representation theorem revisited

Authors

Abstract

There is a longstanding debate whether the Kolmogorov–Arnold representation theorem can explain the use of more than one hidden layer in neural networks. The Kolmogorov–Arnold representation decomposes a multivariate function into an interior and an outer function and therefore has indeed a structure similar to a neural network with two hidden layers. But there are distinctive differences. One of the main obstacles is that the outer function depends on the represented function and can be wildly varying even if the represented function is smooth. We derive modifications of the Kolmogorov–Arnold representation that transfer smoothness properties of the represented function to the outer function and can be well approximated by ReLU networks. It appears that, instead of two hidden layers, a more natural interpretation of the Kolmogorov–Arnold representation is that of a deep network where most layers are required to approximate the interior function.
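For context, the "interior" and "outer" functions mentioned in the abstract refer to the classical representation. A standard statement of the theorem (paraphrased from the usual textbook formulation, not quoted from this paper) is that every continuous f : [0,1]^n → ℝ can be written as

```latex
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right),
```

where the φ_{q,p} are continuous univariate interior functions independent of f, and the Φ_q are continuous univariate outer functions that depend on f. The two nested levels of univariate composition are what suggest the analogy with a two-hidden-layer network.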


Similar articles

The Separatrix Theorem Revisited

We present a simple proof of Camacho–Sad's Theorem on the existence of invariant analytic curves for singular holomorphic foliations in (C², 0), avoiding the combinatorial arguments on the residues by means of a ramification.


The PCF Theorem Revisited

The pcf theorem was proved for reduced products ∏_{i<κ} λ_i / I, where κ < min_{i<κ} λ_i. Here we prove this theorem under weaker assumptions such as wsat(I) < min_{i<κ} λ_i, where wsat(I) is the minimal θ such that κ cannot be divided into θ sets ∉ I (or even a slightly weaker condition). We also look at the existence of exact upper bounds relative to <_I (<_I-eub) as well as cardinalities of reduced products and the cardinals TD(λ). Finally we apply this ...


The folk theorem revisited

This paper develops a simple "instant-response" model of strategic behavior where players can react instantly to changing circumstances but at the same time face some inertia after changing action. The framework is used to reconsider the folk theorem and, in particular, the role of the key condition of dimensionality. In contrast to the discounted case in discrete time, here low dimensionality...


The Conservation Theorem revisited

This paper describes a method of proving strong normalization based on an extension of the conservation theorem. We introduce a structural notion of reduction that we call β_S, and we prove that any λ-term that has a β_I β_S-normal form is strongly β-normalizable. We show how to use this result to prove the strong normalization of different typed λ-calculi.



Journal

Journal title: Neural Networks

Year: 2021

ISSN: 1879-2782, 0893-6080

DOI: https://doi.org/10.1016/j.neunet.2021.01.020